4 research outputs found

    DeepScribe: Localization and Classification of Elamite Cuneiform Signs Via Deep Learning

    Full text link
    Twenty-five hundred years ago, the paperwork of the Achaemenid Empire was recorded on clay tablets. In 1933, archaeologists from the University of Chicago's Oriental Institute (OI) found tens of thousands of these tablets and fragments during the excavation of Persepolis. Many of these tablets have been painstakingly photographed and annotated by expert cuneiformists, and now provide a rich dataset consisting of over 5,000 annotated tablet images and 100,000 cuneiform sign bounding boxes. We leverage this dataset to develop DeepScribe, a modular computer vision pipeline capable of localizing cuneiform signs and providing suggestions for the identity of each sign. We investigate the difficulty of learning subtasks relevant to cuneiform tablet transcription on ground-truth data, finding that a RetinaNet object detector can achieve a localization mAP of 0.78 and a ResNet classifier can achieve a top-5 sign classification accuracy of 0.89. The end-to-end pipeline achieves a top-5 classification accuracy of 0.80. As part of the classification module, DeepScribe groups cuneiform signs into morphological clusters. We consider how this automatic clustering approach differs from the organization of standard, printed sign lists and what we may learn from it. These components, trained individually, are sufficient to produce a system that can analyze photos of cuneiform tablets from the Achaemenid period and provide useful transliteration suggestions to researchers. We evaluate the model's end-to-end performance on locating and classifying signs, providing a roadmap to a linguistically aware transliteration system, then consider the model's potential utility when applied to other periods of cuneiform writing.
    Comment: Currently under review in the ACM JOCC
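The abstract reports top-5 classification accuracy for the sign classifier (0.89) and for the end-to-end pipeline (0.80). For readers unfamiliar with the metric, here is a minimal, self-contained sketch of how top-k accuracy is computed; the scores and class inventory below are toy values, not DeepScribe's data.

```python
def top_k_accuracy(scores, labels, k=5):
    """Fraction of samples whose true label appears among the k
    highest-scoring classes (the metric reported as 'top-5 accuracy')."""
    hits = 0
    for row, label in zip(scores, labels):
        # indices of the k highest-scoring classes for this sample
        topk = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:k]
        hits += label in topk
    return hits / len(labels)

# Toy example: 3 "sign images" scored against 6 hypothetical sign classes.
scores = [
    [0.1, 0.9, 0.2, 0.3, 0.1, 0.0],  # true class 1 ranked first -> hit
    [0.5, 0.1, 0.2, 0.3, 0.4, 0.6],  # true class 2 ranked fifth -> still a top-5 hit
    [0.9, 0.8, 0.7, 0.6, 0.5, 0.0],  # true class 5 ranked last -> miss
]
labels = [1, 2, 5]
print(top_k_accuracy(scores, labels, k=5))  # → 0.6666666666666666
```

A top-5 metric is a natural fit here because the system's stated goal is to "provide suggestions" to researchers: the correct sign merely has to appear in a short candidate list, not in first place.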

    Graph Data-Models and Semantic Web Technologies in Scholarly Digital Editing

    Get PDF
    This volume is based on the selected papers presented at the Workshop on Scholarly Digital Editions, Graph Data-Models and Semantic Web Technologies, held at the University of Lausanne in June 2019. The Workshop was organized by Elena Spadini (University of Lausanne) and Francesca Tomasi (University of Bologna), and sponsored by the Swiss National Science Foundation through a Scientific Exchange grant, and by the Centre de recherche sur les lettres romandes of the University of Lausanne. The Workshop comprised two full days of vibrant discussions among the invited speakers, the authors of the selected papers, and other participants. The acceptance rate following the open call for papers was around 60%. All authors – both selected and invited speakers – were asked to provide a short paper two months before the Workshop. The authors were then paired up, and each pair exchanged papers. Paired authors prepared questions for one another, which were to be addressed during the talks at the Workshop; in this way, conversations started well before the Workshop itself. After the Workshop, the papers underwent a second round of peer review before inclusion in this volume. This time, the relevance of the papers was not under discussion, but reviewers were asked to appraise specific aspects of each contribution, such as its originality or level of innovation, its methodological accuracy and knowledge of the literature, as well as more formal parameters such as completeness, clarity, and coherence. The bibliography of all of the papers is collected in the public Zotero group library GraphSDE2019, which has been used to generate the reference list for each contribution in this volume. The invited speakers came from a wide range of backgrounds (academic, commercial, and research institutions) and represented the different actors involved in the remediation of our cultural heritage in the form of graphs and/or in a semantic web environment. 
Georg Vogeler (University of Graz) and Ronald Haentjens Dekker (Royal Dutch Academy of Sciences, Humanities Cluster) brought the Digital Humanities research perspective; the work of Hans Cools and Roberta Laura Padlina (University of Basel, National Infrastructure for Editions), as well as of Tobias Schweizer and Sepideh Alassi (University of Basel, Digital Humanities Lab), focused on infrastructural challenges and the development of conceptual and software frameworks to support researchers' needs; Michele Pasin's contribution (Digital Science, Springer Nature) was informed by his experiences in both academic research, and in commercial technology companies that provide services for the scientific community. The Workshop featured not only the papers of the selected authors and of the invited speakers, but also moments of discussion between interested participants. In addition to the common Q&A time, during the second day one entire session was allocated to working groups delving into topics that had emerged during the Workshop. Four working groups were created, with four to seven participants each, and each group presented a short report at the end of the session. Four themes were discussed: enhancing TEI from documents to data; ontologies for the Humanities; tools and infrastructures; and textual criticism. All of these themes are represented in this volume. The Workshop would not have been of such high quality without the support of the members of its scientific committee: Gioele Barabucci, Fabio Ciotti, Claire Clivaz, Marion Rivoal, Greta Franzini, Simon Gabay, Daniel Maggetti, Frederike Neuber, Elena Pierazzo, Davide Picca, Michael Piotrowski, Matteo Romanello, Maïeul Rouquette, Elena Spadini, Francesca Tomasi, Aris Xanthos – and, of course, the support of all the colleagues and administrative staff in Lausanne, who helped the Workshop to become a reality. The final versions of these papers underwent a single-blind peer review process. 
We want to thank the reviewers: Helena Bermudez Sabel, Arianna Ciula, Marilena Daquino, Richard Hadden, Daniel Jeller, Tiziana Mancinelli, Davide Picca, Michael Piotrowski, Patrick Sahle, Raffaele Viglianti, Joris van Zundert, and others who preferred not to be named personally. Your input enhanced the quality of the volume significantly! It is sad news that Hans Cools passed away during the production of the volume. We are proud to document a recent state of his work and will miss him and his ability to implement the vision of a digital scholarly edition based on graph data-models and semantic web technologies. The production of the volume would not have been possible without the thorough copy-editing and proofreading by Lucy Emmerson and the support of the IDE team, in particular Bernhard Assmann, the TeX-master himself. This volume is sponsored by the University of Bologna and by the University of Lausanne. Bologna, Lausanne, Graz, July 2021 Francesca Tomasi, Elena Spadini, Georg Vogeler

    The Power of OCHRE’s Highly Atomic Graph Database Model for the Creation and Curation of Digital Text Editions

    Get PDF
    The Online Cultural and Historical Research Environment (OCHRE) is a research database platform that provides a suite of tools to aid in the curation of digital text editions. The power and flexibility of the OCHRE system are predicated on the underlying data model, which is constructed as a graph database. OCHRE is based on a semi-structured, item-based data model where data are atomized into granular items and organized through hierarchical arrangement and cross-cutting links. In this paper, we describe OCHRE's graph data model and demonstrate how this approach revolutionizes digital philology.
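To make the abstract's description concrete, here is a minimal sketch of what an "atomized, item-based" model with hierarchical arrangement and cross-cutting links might look like. All class names, identifiers, and sample data below are illustrative assumptions, not OCHRE's actual API or schema.

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    """A single atomized unit (e.g. a text, a line, a word, a sign)."""
    uid: str
    label: str
    children: list = field(default_factory=list)  # hierarchical arrangement
    links: list = field(default_factory=list)     # cross-cutting links to other items

def add_child(parent, child):
    parent.children.append(child)

def link(item, target, relation):
    item.links.append((relation, target.uid))

# Atomize a tiny edition: a text contains a line, which contains words.
text = Item("t1", "sample text")
line = Item("t1.l1", "line 1")
word1 = Item("t1.l1.w1", "first word")
word2 = Item("t1.l1.w2", "second word")
add_child(text, line)
add_child(line, word1)
add_child(line, word2)

# A cross-cutting link, e.g. from a word to a variant reading in another witness.
variant = Item("w2.var1", "variant reading")
link(word2, variant, "has-variant")

def descendants(item):
    """Walk the hierarchy below an item."""
    for child in item.children:
        yield child
        yield from descendants(child)

print([i.uid for i in descendants(text)])  # → ['t1.l1', 't1.l1.w1', 't1.l1.w2']
```

The point of atomization is that the same granular item can sit in one containment hierarchy while participating in arbitrarily many cross-cutting relations, which is what a graph model handles naturally and a flat document format does not.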

    New Digital Tools for a New Critical Edition of the Hebrew Bible

    No full text
    This article describes the digital edition of the Hebrew Bible: A Critical Edition (HBCE), which is being produced as part of a project called Critical Editions for Digital Analysis and Research (CEDAR) at the University of Chicago. We first discuss the goals of the HBCE and its requirements for a digital edition. We then turn to the CEDAR project and the advances it offers, both theoretical and technological. Finally, we present an illustration of how a reader might use the digital HBCE to interact with the biblical text in innovative ways.